Iterative approach to deployments#

Progressive delivery is a deployment practice that aims to roll out new features gradually. It enforces the gradual release of a feature while, at the same time, trying to avoid downtime. It’s an iterative approach to deployments.

There we go. That’s the definition. It’s intentionally broad because progressive delivery encompasses quite a few practices, like blue/green deployments, rolling updates, canary deployments, and so on.

Soon we’ll see what it looks like in practice. For now, the most crucial question is not what it is, but why we want it and which problems it solves.

Traditional deployment mechanism#

The “traditional” deployment mechanism consists of shutting down the old release and deploying a new one in its place. Sometimes it’s called “big bang” deployment, even though the more commonly used term is “recreate strategy.”
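In Kubernetes terms, the recreate strategy is selected explicitly in a Deployment manifest. The sketch below is a minimal, hypothetical example; the names and the image are placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                  # hypothetical name
spec:
  replicas: 1
  strategy:
    type: Recreate              # terminate all old Pods before creating new ones
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-registry/my-app:1.0.0   # placeholder image
```

With `type: Recreate`, Kubernetes shuts down every Pod of the old release before any Pod of the new one starts, which is exactly where the downtime comes from.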

Issues with “big bang” deployments#

The major problem with big bang deployments is downtime. Between shutting down the old release and deploying a new one in its place, there’s a period during which the application isn’t available; neither the old nor the new release is running. Users don’t like that, and businesses hate not being able to serve users. Downtime might be one of the main reasons why releases were so infrequent in the past.

If downtime is an inevitable consequence of deploying new releases, it makes sense not to deploy often. The fewer releases we make in a year, the less deployment-related downtime we incur. But that’s also bad. Users don’t like it when a service is unavailable, but they also don’t like waiting for new features, nor are they thrilled with the prospect of not having bugs fixed.

Why big bang is still in use#

Why would anyone use the big bang deployment strategy if it produces downtime? There are two common answers to that question.

To begin with, we might not know that there are better ways to deploy software. That’s an easy problem to solve. All we have to do is continue reading, and we’ll soon learn how to do it better. But, there’s a more problematic reason for using that strategy.

Sometimes, deploying new releases in a way that produces downtime is the only option we have. Sometimes, the architecture of our applications does not permit anything but the shut-it-down-first-and-deploy-later approach.

Our approach#

To begin with, if an application can’t scale, there’s no other option. It’s impossible to deploy a new release with zero downtime without running at least two replicas of the application in parallel. No matter which zero-downtime deployment strategy we choose, the old and new releases will run concurrently, even if only for a few milliseconds. If an application cannot scale horizontally, it can’t have more than one replica.

But horizontal scaling is easy, we might think. All we have to do is set the `replicas` field of a Kubernetes Deployment or StatefulSet to a value higher than 1, and voilà, the problem is solved. Right?
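Setting the replica count really is a one-line change. The fragment below shows the relevant part of a Deployment spec (the rest of the manifest is omitted):

```yaml
spec:
  replicas: 3   # more than one replica is a precondition for zero-downtime deployments
```

The catch, as the next paragraph explains, is that the application must actually tolerate running as multiple replicas.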

To begin with, stateful applications that can’t replicate data between replicas can’t scale horizontally; otherwise, the effects of scaling would be catastrophic. That means we either have to make our applications stateless or figure out how to replicate data. The latter option is a horrible one, close to impossible to do well. Consistent and reliable replication is hard; it’s so complicated that even some databases struggle with it. Accomplishing it for our apps is a waste of time. It’s much easier and better to keep all application state in an external database, which makes the applications themselves stateless.

Backward compatibility#

So, we might be inclined to think that if an application can scale, it can use progressive delivery. That would be too hasty. There’s more to it than being stateless.

Progressive delivery means not only that multiple replicas of an application run in parallel but also that two releases do. That means there’s no guarantee which version a user will see, nor with which release other applications are communicating. As a result, each release needs to be backward compatible. It doesn’t matter whether the change is to a database schema, an API contract, or the frontend. If it’s going to be deployed without downtime, two releases will run in parallel, even if only for a few milliseconds, and there’s no telling which one a user or process will hit. Even if we employ feature toggles (or feature flags, or whatever we call them these days), backward compatibility is a must.

Where we’ve gotten so far#

We know that horizontal scaling and backward compatibility are requirements. They’re unavoidable. They aren’t the only requirements for progressive delivery, but they’re a good start. The others mostly depend on the specific processes and architecture of our applications, so we’ll skip commenting on them since they vary from one case to another.

Optionally, we might need to have continuous delivery or continuous deployment pipelines. We might need to have a firm grasp of traffic management. We might have to invest in observability and alerting. Many of these things aren’t strict requirements for progressive delivery, but our lives will only become harder without them. That’s not the outcome we should strive for. We’ll go through some of those when we reach the practical examples.

Constraints with progressive delivery#

For now, what matters is that progressive delivery is not well-suited for immature teams; it requires a high level of experience. It might not look that way initially, but once we reach production, and especially at a large scale, things can quickly get out of hand if the processes and tools we use aren’t backed by extensive experience and high maturity.

All in all, progressive delivery is an advanced technique of deploying new releases that lowers the risk, reduces the blast radius of potential issues, allows us to test in production, and so on and so forth.

Now, let’s go back to the initial promise of going through theory fast. There’s only one crucial question left to answer before moving to the fun part.

Which types of progressive delivery do we have?#

Progressive delivery is an umbrella for different deployment practices, with only one thing in common. They all roll out new releases progressively. That can be over a few milliseconds, a few minutes, or even a few days. The duration varies from one case to another.

What matters is that progressive delivery is an iterative approach to the deployment process. It can be rolling updates, blue/green deployments, canary deployments, or quite a few others. They’re all variations of the same idea.

We won’t explore the recreate strategy since you’re either already using it, and hence know what it is, or you don’t need it because your company doesn’t have any legacy applications.

Rolling updates strategy#

Similarly, we won’t go through the rolling updates strategy because you’re likely already using it. It’s the default deployment strategy in Kubernetes, and nearly all the examples from the other sections used it.
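For reference, the default strategy can be tuned with `maxSurge` and `maxUnavailable`. The fragment below shows the relevant part of a Deployment spec, with illustrative values:

```yaml
spec:
  replicas: 3
  strategy:
    type: RollingUpdate          # the Kubernetes default, shown here explicitly
    rollingUpdate:
      maxSurge: 1                # at most one extra Pod above the desired count
      maxUnavailable: 0          # never drop below the desired count during a rollout
```

With these values, Kubernetes replaces Pods one at a time, always keeping the full replica count serving traffic.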

Finally, we won’t explore blue/green deployments. They made sense in the past, when infrastructure was static, applications were mutable, rollbacks were expensive, and so on. Blue/green was the first commonly used progressive delivery strategy, and it made a lot of sense at the time, but not anymore.

That leaves us with only one progressive delivery strategy worth exploring: canary deployments.
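As a preview of what a canary deployment looks like in practice, here is a sketch of an Argo Rollouts manifest. The names, image, weights, and durations are all hypothetical; only the overall shape of the `canary` strategy matters:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: my-app                   # hypothetical name
spec:
  replicas: 4
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-registry/my-app:1.0.0   # placeholder image
  strategy:
    canary:
      steps:
      - setWeight: 20            # send roughly 20% of traffic to the new release
      - pause: {duration: 10m}   # observe metrics before proceeding
      - setWeight: 50
      - pause: {duration: 10m}   # after the last step, the rollout is promoted fully
```

The rollout proceeds through the steps in order, so a bad release can be aborted while most traffic still hits the old version.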
